[go/mysql] Avoid static buffer and use tiered pool in #4182
sougou merged 1 commit into vitessio:master from LK4D4:avoid_static_buffer
Conversation
go/bucketpool/bucketpool.go
Outdated
I benchmarked, and there is no difference between the New function and the if i == nil approach now.
there should be at least 1 alloc/op due to the interface{} framing. probably irrelevant for the simplification.
sougou
left a comment
Initial comments from eyeballing.
go/bucketpool/bucketpool.go
Outdated
return p.pools[len(p.pools)-1] instead.
Not sure I understand why :/ It will return the last pool, which contains buffers smaller than size.
Oh, even though it shouldn't be reachable...
Maybe panic? :)
go/bucketpool/bucketpool.go
Outdated
Use an expression instead. It will be complex, but efficient (and worth it). This also means that maxSize is not needed.
And write lots of tests to prove it's correct, especially boundary conditions :).
I don't understand what "expression" means here :)
Ooooh, I got it. Sorry :) Will do.
I'll clarify my comments in the other PR
danieltahara
left a comment
obviously also pending rebase on the other diff.
would be curious to see the delta between bucket pool and here
go/mysql/conn.go
Outdated
i wonder if it makes sense to put "finisher" functions as return values to these? That way it prevents people from forgetting to recycle. same with the write side.
go/mysql/conn.go
Outdated
you shouldn't need this length check anymore. same with read.
and also BigBuffer can go away (or whatever the last "policy" is)
True for writes, but for reads the code is different for large buffers: it reads packets one by one instead of using ReadAll.
So, here are benchmark results compared to the tiered pool PR: And I don't know what could cause this slowdown... Maybe I'll try reverting to readPacketDirect... though it's not in the profile top 20.
What are the benchmarks compared against?
go/bucketpool/bucketpool.go
Outdated
Will this work correctly if size was 1 and minSize was 1024?
Nevermind. I see you set idx to 0 next :)
@sougou compared against #4183 it causes performance degradation, especially for medium queries. I have no idea why and am kinda losing my mind at this point :)
I think the medium buffer is slower because its size is just above maxSize, so it ends up with fresh allocations on every iteration. I spent some more time digging through the code, and have a few observations. The TL;DR is that we can get rid of all read and write policies.
The problem is that the slowness isn't caused by any code change apart from removing the buffer allocation from Conn. Literally, if I just allocate an unused 16KB []byte, everything is perfect.
Wow, great observation about reader/writer. That will make things so much simpler: do the header read for read and then get a pooled buffered reader, and write is straightforward.
I got confused (by looking at the Pool benchmarks). I see that you're now using 16MB. The buffer thing is weird indeed. We can look at varying the medium query size to see where it jumps; that could give us a clue. I'm wondering if pprof would help for such short runs.
@sougou I tried queries twice as big, with the same result :(
Just use sync.Pool always

Signed-off-by: Alexander Morozov <lk4d4math@gmail.com>
danieltahara
left a comment
Removing my block. We can deprecate big buffers in a follow-up.
Benchmark results:
I added the tiered pool here because without it the performance hit is too big (apparently because we use a smaller buffer for readOnePacket or something, and then when we read a query, with very high probability the buffer we get from the pool is that smaller buffer).
cc @danieltahara